Sensitivity analysis (SA) is the study of how the variation (uncertainty) in the output of a statistical model can be apportioned to different sources of variation in the inputs of the model.[1] Put another way, it is a technique for systematically changing variables in a model to determine the effects of such changes.
In any budgeting process there are always variables that are uncertain. Future tax rates, interest rates, inflation rates, headcount, operating expenses and other variables may not be known with great precision. Sensitivity analysis answers the question, "if these variables deviate from expectations, what will the effect be (on the business, model, system, or whatever is being analyzed)?"
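As a minimal illustration of this kind of "what if" question, the sketch below uses a deliberately simple, entirely hypothetical profit model and perturbs each budget assumption by 10% either side of its expected value:

```python
# A toy "what if" analysis: each input of a simple (hypothetical) operating-
# profit model is varied one at a time, 10 % either side of expectations.

def operating_profit(revenue, cost_ratio, tax_rate):
    return revenue * (1.0 - cost_ratio) * (1.0 - tax_rate)

baseline = {"revenue": 1_000_000.0, "cost_ratio": 0.60, "tax_rate": 0.25}
base_profit = operating_profit(**baseline)

for name, value in baseline.items():
    for shock in (-0.10, +0.10):
        scenario = dict(baseline, **{name: value * (1.0 + shock)})
        change = operating_profit(**scenario) / base_profit - 1.0
        print(f"{name} {shock:+.0%}: profit changes by {change:+.1%}")
```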
In more general terms uncertainty and sensitivity analysis investigate the robustness of a study when the study includes some form of statistical modelling. Sensitivity analysis can be useful to computer modelers for a range of purposes, from checking the robustness of model results to identifying which inputs drive the output.[2]
Statistical problems encountered in the social, economic or natural sciences may entail the use of statistical models, which generally do not lend themselves to a straightforward understanding of the relationship between input factors (what goes into the model) and output (the model's dependent variables). Such an appreciation, i.e. the understanding of how the model behaves in response to changes in its inputs, is of fundamental importance to ensure the correct use of such models.
A statistical model is defined by a series of equations, input factors, parameters, and variables aimed at characterizing the process being investigated.
Model inputs are subject to many sources of uncertainty, including errors of measurement, absence of information and poor or partial understanding of the driving forces and mechanisms. This uncertainty imposes a limit on our confidence in the response or output of the model. Further, models may have to cope with the natural intrinsic variability of the system, such as the occurrence of stochastic events.
Good modeling practice requires that the modeler provides an evaluation of the confidence in the model, possibly assessing the uncertainties associated with the modeling process and with the outcome of the model itself. Uncertainty and sensitivity analysis offer valid tools for characterizing the uncertainty associated with a model. Uncertainty analysis (UA) quantifies the uncertainty in the outcome of a model. Sensitivity analysis has the complementary role of ordering the inputs by the strength and relevance of their contribution to the variation in the output.[1]
In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance. National and international agencies involved in impact assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the European Commission, the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and the US Environmental Protection Agency.
Sometimes a sensitivity analysis may reveal surprising insights about the subject of interest. For instance, the field of multi-criteria decision making (MCDM) studies (among other topics) the problem of how to select the best alternative among a number of competing alternatives. This is an important task in decision making. In such a setting each alternative is described in terms of a set of evaluative criteria, and these criteria are associated with weights of importance. Intuitively, one may think that the larger the weight for a criterion is, the more critical that criterion should be. However, this may not be the case. It is important here to distinguish the notion of criticality from that of importance. By critical we mean that a small change (as a percentage) in a criterion's weight may cause a significant change in the final solution. It is possible for criteria with rather small weights of importance (i.e., ones that are not so important in that respect) to be much more critical in a given situation than ones with larger weights.[3][4] That is, a sensitivity analysis may shed light on issues not anticipated at the beginning of a study. This, in turn, may dramatically improve the effectiveness of the initial study and assist in the successful implementation of the final solution.
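The toy example below (hypothetical scores and weights, ranked with a simple weighted-sum model) illustrates the point: the criterion that carries only 20% of the total weight turns out to need a smaller percentage change in its weight to overturn the ranking than the criterion carrying 80%.

```python
import numpy as np

# Hypothetical weighted-sum ranking of two alternatives on two criteria.
scores = np.array([[0.75, 0.50],    # alternative A
                   [0.70, 0.72]])   # alternative B
weights = np.array([0.80, 0.20])

def winner(w):
    # Index of the best alternative under (normalized) weights w.
    return int(np.argmax(scores @ (w / w.sum())))

base = winner(weights)
print("baseline winner:", "AB"[base])

# For each criterion, find the smallest relative change in its weight
# (increase or decrease) that changes the winning alternative.
for j in range(len(weights)):
    for rel in np.arange(0.001, 1.0, 0.001):
        flipped = False
        for sign in (+1.0, -1.0):
            w = weights.copy()
            w[j] *= 1.0 + sign * rel
            if winner(w) != base:
                flipped = True
        if flipped:
            print(f"criterion {j + 1} (weight {weights[j]:.2f}) flips the "
                  f"ranking at a {rel:.1%} change in its weight")
            break
```

In this particular configuration a change of roughly 9% in the low-weight criterion is enough to reverse the decision, whereas the heavily weighted criterion needs a change of about 10%.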
There are several possible procedures to perform uncertainty analysis (UA) and sensitivity analysis (SA). One important class is that of local methods, based on the partial derivative of the output $Y$ with respect to an input factor $X_i$,

$$\left.\frac{\partial Y}{\partial X_i}\right|_{\mathbf{x}^0},$$

where the subscript $\mathbf{x}^0$ indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling[5][6] and automated differentiation[7] are methods in this class.
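As a sketch of the derivative-based idea, local sensitivities can be approximated numerically with central finite differences around a nominal point; the model below is hypothetical, and for large models adjoint or automated-differentiation methods compute the same quantities far more efficiently.

```python
import numpy as np

def model(x):
    # Hypothetical model Y = f(X1, X2, X3)
    return x[0] ** 2 + 3.0 * x[1] + np.sin(x[2])

x0 = np.array([1.0, 2.0, 0.5])   # nominal ("local") point x^0
h = 1e-6                         # finite-difference step

local_sensitivities = []
for i in range(len(x0)):
    x_plus, x_minus = x0.copy(), x0.copy()
    x_plus[i] += h
    x_minus[i] -= h
    # Central-difference approximation of dY/dX_i evaluated at x^0
    local_sensitivities.append((model(x_plus) - model(x_minus)) / (2 * h))

print(local_sensitivities)   # approx. [2.0, 3.0, cos(0.5) = 0.878]
```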
Often (e.g. in sampling-based methods) UA and SA are performed jointly by executing the model repeatedly for combinations of factor values sampled from some probability distribution. The following steps can be listed:

1. Characterize the uncertainty in each input factor, e.g. by a range or a probability distribution.
2. Generate a sample of input combinations from those distributions.
3. Run the model for each sampled combination.
4. Use the resulting outputs to quantify the uncertainty of the output (UA) and to compute sensitivity measures that rank the inputs (SA).
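A minimal sampling-based sketch of these steps is shown below; the model and the input distributions are purely hypothetical, and squared correlation coefficients are used as a crude sensitivity measure (adequate only for roughly linear models).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# 1-2. Assign (hypothetical) distributions to the inputs and sample them.
x1 = rng.normal(1.0, 0.3, N)
x2 = rng.uniform(0.0, 2.0, N)
x3 = rng.normal(5.0, 1.0, N)

# 3. Run the (hypothetical) model for every sampled combination.
y = 2.0 * x1 + x2 ** 2 + 0.1 * x3

# 4a. Uncertainty analysis: characterize the output distribution.
print(f"mean = {y.mean():.2f}, std = {y.std():.2f}")

# 4b. Sensitivity analysis: rank the inputs by squared correlation with Y.
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: r^2 = {r ** 2:.2f}")
```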
In uncertainty and sensitivity analysis there is a crucial trade-off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician Edward E. Leamer (1990):[22]
I have proposed a form of organized sensitivity analysis that I call ‘global sensitivity analysis’ in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful.
Note that Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions, or to show that its assumptions have not been taken 'wide enough'. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when uncertainties in inputs must be suppressed lest outputs become indeterminate.[23]
In a sensitivity analysis, a Type I error is assessing as important a non-important factor, and a Type II error is assessing as non-important an important factor. A Type III error corresponds to analysing the wrong problem, e.g. via an incorrect specification of the input uncertainties. A number of pitfalls can also undermine a sensitivity analysis; a common one, discussed below, is relying on a one-factor-at-a-time design.
In sensitivity analysis a common approach is that of changing one-factor-at-a-time (OAT), to see what effect this produces on the output. OAT customarily involves moving one input variable while keeping the others at their baseline (nominal) values, returning that variable to its nominal value, and then repeating the procedure for each of the other inputs in the same way.[24]
This appears a logical approach, as any change observed in the output will unambiguously be due to the single factor changed. Furthermore, by changing one factor at a time one can keep all other factors fixed to their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer programme crashes, which are more likely when several input factors are changed simultaneously.[24] The latter occurrence is particularly annoying to modellers, as in that case one does not know which factor's variation caused the model to crash.
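A minimal OAT sketch along these lines, with a hypothetical three-input model and a +10% step applied to each factor in turn while the others are held at their baseline values:

```python
import numpy as np

def model(x):
    # Hypothetical model with three input factors
    return x[0] * x[1] + x[2]

baseline = np.array([1.0, 2.0, 0.5])   # central / nominal values
y0 = model(baseline)

# Move one factor at a time away from its baseline, keeping the others fixed,
# then return it to its nominal value before perturbing the next one.
for i in range(len(baseline)):
    x = baseline.copy()
    x[i] *= 1.10                       # a +10 % step for factor i
    print(f"factor {i + 1}: output changes by {model(x) - y0:+.3f}")
```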
The paradox is that this approach, apparently sound, is non-explorative, with exploration decreasing rapidly with the number of factors. With two factors, and hence in two dimensions, OAT explores (at most) part of a circle rather than the full square (see figure). In this case one step along the abscissa starting from the origin, followed by a similar step along the ordinate (again starting from the origin), leaves us inside the circle and never takes us to the grey corners.
In $k$ dimensions, the volume of the hyper-sphere inscribed in (and tangent to) the unit hyper-cube, divided by the volume of the hyper-cube itself, goes rapidly to zero (e.g. it is less than 1% already for $k = 10$, see figure). Note also that all OAT points are, by design, at most a distance of one from the origin. Given that the diagonal of the hyper-cube is $\sqrt{k}$ in $k$ dimensions, if the points were distributed randomly there would be points (in the corners) at a distance of up to $\sqrt{k}$ from the origin. In ten dimensions there are $2^{10} = 1024$ such corners.
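The numbers behind this argument are easy to reproduce. The sketch below computes the ratio between the volume of the hyper-sphere inscribed in the unit hyper-cube and the volume of the cube itself, together with the length of the cube's diagonal, for a few values of k:

```python
from math import pi, gamma, sqrt

# Volume of the inscribed hyper-sphere (radius 1/2) relative to the unit
# hyper-cube (volume 1), and the cube's diagonal, as functions of k.
for k in (2, 3, 5, 10):
    ratio = pi ** (k / 2) * 0.5 ** k / gamma(k / 2 + 1)
    print(f"k = {k:2d}: sphere/cube volume ratio = {ratio:.4%}, "
          f"diagonal = sqrt({k}) = {sqrt(k):.2f}")
```

For k = 10 the ratio is already about 0.25%, in line with the 'less than 1%' figure quoted above.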
Latin hypercube sampling is often used in contexts where researchers feel that the assumption of independent uncertainty is too strong and it is desirable to explore the corners of the factor space.
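For example, assuming SciPy's quasi-Monte Carlo module (scipy.stats.qmc, available in SciPy 1.7 and later), a Latin hypercube design over three factors with hypothetical ranges can be generated as follows:

```python
from scipy.stats import qmc   # SciPy >= 1.7

# Draw a 100-point Latin hypercube sample over three input factors and
# rescale it from the unit cube to (hypothetical) factor ranges.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_sample = sampler.random(n=100)          # points in [0, 1)^3
sample = qmc.scale(unit_sample,
                   l_bounds=[0.0, 10.0, -1.0],
                   u_bounds=[1.0, 20.0, 1.0])

# Each one-dimensional projection is evenly stratified, so the design spreads
# through the factor space rather than staying close to a single baseline.
print(sample[:5])
```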
While uncertainty analysis studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify which sources of uncertainty weigh more on the study's conclusions. Accordingly, several guidelines for modelling (see e.g. one from the US EPA) or for impact assessment (see one from the European Commission) prescribe sensitivity analysis as a tool to ensure the quality of the modelling/assessment.
The problem setting in sensitivity analysis has strong similarities with design of experiments. In design of experiments one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments.
Sensitivity analysis can be used, among other things, to test the robustness of model results and to simplify models by screening out inputs that have little effect on the output. It also provides information on which factors contribute most to the output variability and on interactions between factors.
Sensitivity analysis is common in physics and chemistry,[25] in financial applications, risk analysis, signal processing, neural networks and any area where models are developed. Sensitivity analysis can also be used in model-based policy assessment studies. Sensitivity analysis can be used to assess the robustness of composite indicators,[26] also known as indices, such as the Environmental Performance Index.
Computer models of the environment are increasingly used in a wide variety of studies and applications. For example, global climate models are used both for short-term weather forecasts and for long-term projections of climate change.
Moreover, computer models are increasingly used for environmental decision making at a local scale, for example for assessing the impact of a waste water treatment plant on a river flow, or for assessing the behavior and lifetime of bio-filters for contaminated waste water.
In both cases sensitivity analysis may help in understanding the contribution of the various sources of uncertainty to the model output uncertainty and to system performance in general. In these cases, depending on model complexity, different sampling strategies may be advisable, and traditional sensitivity indices may have to be generalized to cover multivariate sensitivity analysis, heteroskedastic effects and correlated inputs.
In a decision problem, the analyst may want to identify cost drivers as well as other quantities for which we need to acquire better knowledge in order to make an informed decision. On the other hand, some quantities have no influence on the predictions, so that we can save resources at no loss in accuracy by relaxing some of the conditions. See Corporate finance: Quantifying uncertainty. Sensitivity analysis can help in a variety of other circumstances of this kind, for example by prioritizing the inputs most deserving of better measurement and by identifying the inputs that can be fixed at nominal values without appreciably affecting the output.
However, there are also some problems associated with sensitivity analysis in the business context; for example, the input variables are often interdependent, which makes examining each of them in isolation unrealistic.
In modern econometrics the use of sensitivity analysis to anticipate criticism is the subject of one of the ten commandments of applied econometrics (from Kennedy, 2007[27]):
Thou shall confess in the presence of sensitivity. Corollary: Thou shall anticipate criticism [···] When reporting a sensitivity analysis, researchers should explain fully their specification search so that the readers can judge for themselves how the results may have been affected. This is basically an ‘honesty is the best policy’ approach, advocated by Leamer (1978[28]).
With the accumulation of knowledge about the kinetic mechanisms under investigation and with the advance of modern computing power, detailed complex kinetic models are increasingly used as predictive tools and as aids for understanding the underlying phenomena. A kinetic model is usually described by a set of differential equations representing the concentration-time relationship. Sensitivity analysis has proven to be a powerful tool for investigating complex kinetic models.[29][30]
Kinetic parameters are frequently determined from experimental data via nonlinear estimation. Sensitivity analysis can be used for optimal experimental design, e.g. determining initial conditions, measurement positions and sampling times, so as to generate informative data that are critical to estimation accuracy. A great number of parameters in a complex model can be candidates for estimation, but not all are estimable. Sensitivity analysis can be used to identify the influential parameters which can be determined from the available data, while screening out the unimportant ones. Sensitivity analysis can also be used to identify the redundant species and reactions, allowing model reduction.
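As a sketch of this use, the example below takes a hypothetical first-order reaction dc/dt = -k·c and approximates the sensitivity dc/dk at candidate sampling times by finite differences; the times where |dc/dk| is largest are the most informative for estimating the rate constant k.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order kinetic model: dc/dt = -k * c, with c(0) = c0.
c0, k_nominal = 1.0, 0.5
t_eval = np.linspace(0.0, 10.0, 11)          # candidate sampling times

def concentration(k):
    # Tight tolerances so solver error does not swamp the difference below.
    sol = solve_ivp(lambda t, c: -k * c, (0.0, 10.0), [c0],
                    t_eval=t_eval, rtol=1e-10, atol=1e-12)
    return sol.y[0]

# Central-difference approximation of the sensitivity dc/dk at each time.
h = 1e-4
dc_dk = (concentration(k_nominal + h) - concentration(k_nominal - h)) / (2 * h)

for t, s in zip(t_eval, dc_dk):
    print(f"t = {t:4.1f}: dc/dk = {s:+.4f}")   # analytically -t * c0 * exp(-k*t)
```

In this toy case |dc/dk| peaks at t = 1/k, so measurements taken around that time constrain the rate constant most strongly.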
In a meta-analysis, a sensitivity analysis tests whether the results are sensitive to restrictions on the data included. Common examples are restricting the analysis to large trials only, to higher-quality trials only, or to more recent trials only. If the results are consistent, this provides stronger evidence of an effect and of its generalizability.[31]